3 research outputs found

    Multi-Fidelity Gaussian Process Emulation And Its Application In The Study Of Tsunami Risk Modelling

    Investigating uncertainties in computer simulations can be prohibitive in terms of computational costs, since the simulator needs to be run over a large number of input values. Building a statistical surrogate model of the simulator, using a small design of experiments, greatly alleviates the computational burden of carrying out such investigations. Nevertheless, this can still exceed the computational budget of many studies. We present a novel method, the multilevel adaptive sequential design of computer experiments (MLASCE), in the framework of Gaussian process (GP) emulators. MLASCE combines two major approaches: efficient design of experiments, such as sequential designs, and the combination of training data of different degrees of sophistication in a so-called multi-fidelity method, or multilevel method when these fidelities are ordered, typically by increasing resolution. This dual strategy allows us to allocate limited computational resources efficiently over simulations of different levels of fidelity and to build the GP emulator. The allocation of computational resources is shown to be the solution of a simple optimization problem in a special case for which we theoretically prove the validity of our approach. MLASCE is compared with other existing models of multi-fidelity Gaussian process emulation, and gains of orders of magnitude in accuracy for medium-sized computing budgets are demonstrated in numerical examples. MLASCE is intended to be useful in computer experiments for natural disaster risk, and to be more than a mere tool for calculating the scale of natural disasters. To show that MLASCE meets this expectation, we propose the first end-to-end example of a risk model for household asset loss due to a possible future tsunami. Within this framework, MLASCE provides a reliable statistical surrogate for realistic tsunami risk assessment under a restricted computational budget, delivering accurate and near-instant predictions of future tsunami risks.
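
    To make the multilevel construction concrete, below is a minimal two-level sketch in Python. It fits one GP to many cheap low-fidelity runs, a second GP to the discrepancy between a few expensive high-fidelity runs and the first emulator, and sums the two. This illustrates the general difference-based multilevel GP idea under simplified assumptions, not the authors' MLASCE implementation; the kernel settings and the toy simulators f_low and f_high are hypothetical stand-ins.

    import numpy as np

    def rbf(X1, X2, ls=0.2, var=1.0):
        # Squared-exponential kernel matrix between two sets of 1-D inputs.
        d = X1[:, None] - X2[None, :]
        return var * np.exp(-0.5 * (d / ls) ** 2)

    def gp_fit_predict(X, y, Xs, noise=1e-8):
        # Standard GP regression: posterior mean at the test inputs Xs.
        K = rbf(X, X) + noise * np.eye(len(X))
        return rbf(Xs, X) @ np.linalg.solve(K, y)

    f_low  = lambda x: np.sin(8 * x)               # cheap, coarse simulator (toy)
    f_high = lambda x: np.sin(8 * x) + 0.3 * x**2  # expensive, accurate simulator (toy)

    x_lo = np.linspace(0, 1, 25)   # many cheap runs
    x_hi = np.linspace(0, 1, 5)    # few expensive runs
    xs   = np.linspace(0, 1, 200)  # prediction grid

    # Level 0: emulate the low-fidelity simulator.
    m_lo_at_hi = gp_fit_predict(x_lo, f_low(x_lo), x_hi)
    m_lo       = gp_fit_predict(x_lo, f_low(x_lo), xs)

    # Level 1: emulate the difference between fidelities, then correct level 0.
    delta = f_high(x_hi) - m_lo_at_hi
    m_mf  = m_lo + gp_fit_predict(x_hi, delta, xs)  # multilevel prediction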

    Multi-level emulation of tsunami simulations over Cilacap, South Java, Indonesia

    Carrying out a Probabilistic Tsunami Hazard Assessment (PTHA) requires a large number of simulations run at high resolution. Statistical emulation builds a surrogate to replace the simulator and thus reduces computational costs when propagating uncertainties from the earthquake sources to the tsunami inundations. To reduce these costs further, we propose building emulators that exploit multiple levels of resolution together with a sequential design of computer experiments. By running a few tsunami simulations at high resolution and many more at lower resolutions, we are able to provide realistic assessments, whereas, for the same budget, using only high-resolution tsunami simulations does not yield a satisfactory outcome. As a result, PTHA can be carried out with higher precision at the highest spatial resolutions, and for impacts over larger regions. We provide an illustration for the city of Cilacap, Indonesia, that demonstrates the benefits of our approach.
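
    The sequential design ingredient can be sketched as a simple uncertainty-driven loop: refit the emulator, then run the simulator at the input where the GP is most uncertain. This assumes a plain maximum-variance criterion for illustration, not the adaptive criterion developed in these papers; rbf is the same toy kernel as in the sketch above.

    import numpy as np

    def rbf(X1, X2, ls=0.2, var=1.0):
        d = X1[:, None] - X2[None, :]
        return var * np.exp(-0.5 * (d / ls) ** 2)

    def gp_posterior_var(X, Xs, noise=1e-8):
        # Predictive variance of a zero-mean GP at the candidate inputs Xs.
        K  = rbf(X, X) + noise * np.eye(len(X))
        Ks = rbf(Xs, X)
        return rbf(Xs, Xs).diagonal() - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)

    design     = np.array([0.1, 0.9])   # initial design of experiments
    candidates = np.linspace(0, 1, 101)

    for _ in range(8):
        v      = gp_posterior_var(design, candidates)
        x_next = candidates[np.argmax(v)]   # most uncertain input
        design = np.append(design, x_next)  # run the simulator there next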

    An adaptive strategy for sequential designs of multilevel computer experiments

    Investigating uncertainties in computer simulations can be prohibitive in terms of computational costs, since the simulator needs to be run over a large number of input values. Building an emulator, i.e., a statistical surrogate model of the simulator constructed using a design of experiments made of a comparatively small number of evaluations of the forward solver, greatly alleviates the computational burden of carrying out such investigations. Nevertheless, this can still exceed the computational budget of many studies. Two major approaches have been used to reduce the budget needed to build the emulator: efficient design of experiments, such as sequential designs, and combining training data of different degrees of sophistication in a so-called multifidelity method, or multilevel method when these fidelities are ordered, typically by increasing resolution. We present here a novel method that combines both approaches: the multilevel adaptive sequential design of computer experiments, in the framework of Gaussian process (GP) emulators. We make use of reproducing kernel Hilbert spaces as a tool for our GP approximations of the differences between two consecutive levels. This dual strategy allows us to allocate limited computational resources efficiently over simulations of different levels of fidelity and to build the GP emulator. The allocation of computational resources is shown to be the solution of a simple optimization problem in a special case for which we theoretically prove the validity of our approach. Our proposed method is compared with other existing models of multifidelity Gaussian process emulation. Gains of orders of magnitude in accuracy or computing budget are demonstrated in numerical examples for some settings.
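
    The resource-allocation step can be illustrated with the classical multilevel-style sample allocation: minimizing the total variance, the sum over levels of V_l / n_l, subject to a budget constraint on the sum of n_l * c_l, gives n_l proportional to sqrt(V_l / c_l). This standard allocation is used here as a stand-in, not the specific optimization problem solved in the paper; the costs, variances, and budget below are made up for illustration.

    import numpy as np

    def allocate_runs(costs, variances, budget):
        # Optimal n_l = lam * sqrt(V_l / c_l); the budget constraint fixes lam.
        costs, variances = np.asarray(costs, float), np.asarray(variances, float)
        weights = np.sqrt(variances / costs)
        lam = budget / np.sum(weights * costs)
        return np.maximum(1, np.floor(lam * weights)).astype(int)

    # Three resolutions, each ~8x more costly, with shrinking correction
    # variances: most runs go to the cheap level, very few to the finest.
    print(allocate_runs(costs=[1, 8, 64], variances=[1.0, 0.1, 0.01], budget=500))
    # -> [185  20   2]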